Hardness Results for Structured Linear Systems
We show that if the nearly-linear time solvers for Laplacian matrices and
their generalizations can be extended to solve just slightly larger families of
linear systems, then they can be used to quickly solve all systems of linear
equations over the reals. This result can be viewed either positively or
negatively: either we will develop nearly-linear time algorithms for solving
all systems of linear equations over the reals, or progress on the families we
can solve in nearly-linear time will soon halt.
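For context, a Laplacian system is a linear system Lx = b where L is the graph Laplacian of a weighted undirected graph. A minimal sketch of such a system and a dense baseline solve (the nearly-linear time solvers discussed above replace the cubic-time pseudoinverse used here):

```python
import numpy as np

# Toy weighted undirected graph on 4 vertices: edges (u, v, weight).
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.0)]
n = 4
L = np.zeros((n, n))
for u, v, w in edges:
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w

# For a connected graph, Lx = b is solvable iff b sums to zero
# (b must be orthogonal to the all-ones null space of L).
b = np.array([1.0, -1.0, 2.0, -2.0])
x = np.linalg.pinv(L) @ b  # dense O(n^3) baseline, not a fast solver
assert np.allclose(L @ x, b)
```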
Approximate Gaussian Elimination for Laplacians: Fast, Sparse, and Simple
We show how to perform sparse approximate Gaussian elimination for Laplacian
matrices. We present a simple, nearly linear time algorithm that approximates a
Laplacian by a matrix with a sparse Cholesky factorization, the version of
Gaussian elimination for symmetric matrices. This is the first nearly linear
time solver for Laplacian systems that is based purely on random sampling, and
does not use any graph theoretic constructions such as low-stretch trees,
sparsifiers, or expanders. The crux of our analysis is a novel concentration
bound for matrix martingales where the differences are sums of conditionally
independent variables.
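The elimination step underlying such a factorization can be illustrated on a small graph: eliminating one vertex of a Laplacian yields a Schur complement that is again a Laplacian, namely that of the remaining graph plus a weighted clique on the eliminated vertex's neighbors; it is this clique that the algorithm approximates by random sampling. A minimal numpy sketch of one exact elimination step (not the paper's sampling algorithm):

```python
import numpy as np

# Toy weighted graph: vertex 0 has neighbors 1, 2, 3.
edges = [(0, 1, 2.0), (0, 2, 1.0), (0, 3, 1.0), (1, 2, 1.0)]
n = 4
L = np.zeros((n, n))
for u, v, w in edges:
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w

# One step of Gaussian elimination on vertex 0 is the rank-one
# update L - c c^T / L[0,0] with c = L[:, 0]; the remaining block
# is the Schur complement.
c = L[:, 0:1]
S = (L - c @ c.T / L[0, 0])[1:, 1:]

# S is again a Laplacian: rows sum to zero, off-diagonals nonpositive.
assert np.allclose(S.sum(axis=1), 0)
assert (S[~np.eye(3, dtype=bool)] <= 1e-12).all()
```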
Faster Sparse Matrix Inversion and Rank Computation in Finite Fields
We improve the current best running time value to invert sparse matrices over finite fields, lowering it to an expected O(n^{2.2131}) time for the current values of fast rectangular matrix multiplication. We achieve the same running time for the computation of the rank and nullspace of a sparse matrix over a finite field. This improvement relies on two key techniques. First, we adopt the decomposition of an arbitrary matrix into block Krylov and Hankel matrices from Eberly et al. (ISSAC 2007). Second, we show how to recover the explicit inverse of a block Hankel matrix using low displacement rank techniques for structured matrices and fast rectangular matrix multiplication algorithms. We generalize our inversion method to block structured matrices with other displacement operators and strengthen the best known upper bounds for explicit inversion of block Toeplitz-like and block Hankel-like matrices, as well as for explicit inversion of block Vandermonde-like matrices with structured blocks. As a further application, we improve the complexity of several algorithms in topological data analysis and in finite group theory.
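As a point of reference for the rank computation, a plain Gaussian-elimination baseline over a prime field GF(p) can be sketched as follows (a cubic-time toy, not the block-Krylov method described above):

```python
import numpy as np

def rank_mod_p(A, p):
    """Rank of an integer matrix A over GF(p), p prime, via plain
    Gaussian elimination: an O(n^3) baseline for comparison."""
    A = np.array(A, dtype=np.int64) % p
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]               # swap pivot row into place
        inv = pow(int(A[r, c]), p - 2, p)        # inverse via Fermat's little theorem
        A[r] = (A[r] * inv) % p
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] = (A[i] - A[i, c] * A[r]) % p
        r += 1
    return r

# det([[1, 2], [3, 1]]) = -5: full rank over the rationals,
# but rank 1 over GF(5).
M = [[1, 2], [3, 1]]
assert rank_mod_p(M, 5) == 1
assert rank_mod_p(M, 7) == 2
```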
Sampling Random Spanning Trees Faster than Matrix Multiplication
We present an algorithm that, with high probability, generates a random
spanning tree from an edge-weighted undirected graph in Õ(n^{4/3} m^{1/2} + n^2)
time (the Õ notation hides polylog(n) factors). The tree is sampled from a
distribution where the probability of each tree is proportional to the product
of its edge weights. This improves upon the previous best algorithm due to
Colbourn et al. that runs in matrix multiplication time, O(n^ω). For the
special case of unweighted graphs, this improves upon the best previously known
running time of Õ(min{n^ω, m√n, m^{4/3}}) (Colbourn et al. '96, Kelner-Madry
'09, Madry et al. '15).
The effective resistance metric is essential to our algorithm, as in the work
of Madry et al., but we eschew determinant-based and random walk-based
techniques used by previous algorithms. Instead, our algorithm is based on
Gaussian elimination, and the fact that effective resistance is preserved in
the graph resulting from eliminating a subset of vertices (called a Schur
complement). As part of our algorithm, we show how to compute
ε-approximate effective resistances for a set S of vertex pairs via
approximate Schur complements in Õ(m + (n + |S|)ε^{-2}) time,
without using the Johnson-Lindenstrauss lemma, which requires
Õ(min{(m + |S|)ε^{-2}, m + nε^{-4} + |S|ε^{-2}}) time. We combine this
approximation procedure with an error-correction procedure for
handling edges where our estimate is not sufficiently accurate.
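The fact that eliminating a subset of vertices preserves effective resistances among the remaining vertices can be checked directly on a small graph. A minimal numpy sketch (exact Schur complements and a dense pseudoinverse, not the paper's approximate procedure):

```python
import numpy as np

def laplacian(edges, n):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def eff_resistance(L, u, v):
    # R(u, v) = (e_u - e_v)^T L^+ (e_u - e_v), via dense pseudoinverse.
    e = np.zeros(len(L)); e[u], e[v] = 1.0, -1.0
    return float(e @ np.linalg.pinv(L) @ e)

edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 1.0)]
L = laplacian(edges, 4)

# Schur complement eliminating vertices {2, 3}, keeping {0, 1}.
keep, elim = [0, 1], [2, 3]
S = (L[np.ix_(keep, keep)]
     - L[np.ix_(keep, elim)]
       @ np.linalg.inv(L[np.ix_(elim, elim)])
       @ L[np.ix_(elim, keep)])

# The effective resistance between the kept vertices is unchanged.
assert np.isclose(eff_resistance(L, 0, 1), eff_resistance(S, 0, 1))
```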